    OpenKnowledge at work: exploring centralized and decentralized information gathering in emergency contexts

    Real-world experience teaches us that efficient coordination of crisis response is crucial to managing emergencies. ICT infrastructures can support the people involved in such contexts by enabling effective interaction, and they should also provide innovative means of communication and information management. At present, centralized architectures are mostly used for this purpose; however, alternative infrastructures based on distributed information sources are currently being explored, studied and analyzed. This paper investigates the capability of a novel approach, developed within the European project OpenKnowledge, to support centralized as well as decentralized architectures for information gathering. For this purpose we developed an agent-based e-Response simulation environment, fully integrated with the OpenKnowledge infrastructure, through which existing emergency plans are modelled and simulated. Preliminary results show that OpenKnowledge can support both of the afore-mentioned architectures and that, under ideal assumptions, their performance is comparable.

    Predicting the content of peer-to-peer interactions

    Software agents interact to solve tasks, the details of which need to be described in a language understandable by all the actors involved. Ontologies provide a formalism for defining both the domain of the task and the terminology used to describe it. However, finding a shared ontology has proved difficult: different institutions and developers have different needs and formalise them in different ontologies. In a closed environment it is possible to force all the participants to share the same ontology, while in open and distributed environments ontology mapping can provide interoperability between heterogeneous interacting actors. However, conventional mapping systems focus on acquiring static information and on mapping whole ontologies, which is infeasible in open systems. This thesis presents a different approach to the problem of heterogeneity. It starts from the intuitive idea that when similar situations arise, similar interactions are performed. If the interactions between actors are specified in formal scripts, shared by all the participants, then when the same situation arises, the same script is used. The main hypothesis of this thesis is that, by analysing different runs of these scripts, it is possible to create a statistical model of the interactions that reflects the frequency of terms in messages and of ontological relations between terms in different messages. The model is then used during a run of a known interaction to compute the probability distribution for terms in received messages. This distribution provides additional information, contextual to the interaction, that a traditional ontology matcher can use to improve efficiency, by restricting comparisons to the terms most likely in the current context, and potentially also recall and precision, in particular by helping disambiguation.
The ability to create a model that reflects real phenomena in this sort of environment is evaluated by analysing the quality of the predictions, in particular by verifying how various features of the interactions, such as their non-stationarity, affect them. The improvements to a matcher we developed are also evaluated. The overall results are very promising: using the predictor can reduce the overall computation time for matching tenfold, while maintaining, and in some cases improving, recall and precision.
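The statistical model described in this abstract can be illustrated with a minimal sketch. This is not the thesis implementation: the class, method names, and the toy protocol runs below are all hypothetical; the sketch only shows the core idea of counting term frequencies per interaction step across past runs and turning them into a probability distribution for a new run.

```python
from collections import Counter, defaultdict

class TermPredictor:
    """Toy frequency model of terms exchanged in runs of a shared
    interaction script (illustrative only, not the thesis system)."""

    def __init__(self):
        # step index -> Counter of terms observed at that step in past runs
        self.counts = defaultdict(Counter)

    def observe(self, run):
        """Record one completed run: a list of terms, one per protocol step."""
        for step, term in enumerate(run):
            self.counts[step][term] += 1

    def distribution(self, step):
        """Empirical probability distribution over terms at a given step,
        usable to rank an ontology matcher's candidate comparisons."""
        c = self.counts[step]
        total = sum(c.values())
        return {term: n / total for term, n in c.items()}

# Two hypothetical past runs of the same interaction script.
predictor = TermPredictor()
predictor.observe(["ask", "price", "accept"])
predictor.observe(["ask", "price", "reject"])
dist = predictor.distribution(2)  # {"accept": 0.5, "reject": 0.5}
```

A matcher consulting `dist` could compare an incoming term against the likeliest candidates first, which is the efficiency gain the abstract describes.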

    Enabling Information Gathering Patterns for Emergency Response with the OpenKnowledge System

    Today's information systems must operate effectively within open and dynamic environments. This challenge becomes a necessity for crisis management systems: in emergency contexts, a large number of actors need to collaborate and coordinate at disaster scenes by exchanging and reporting information with each other and with the people in the control room. In such open settings, coordination technologies play a crucial role in supporting mobile agents, located in areas prone to sudden changes, with adaptive and flexible interaction patterns. Research efforts in different areas are converging to devise suitable mechanisms for process coordination: specifically, current results on service-oriented computing and multi-agent systems are being integrated to enable dynamic interaction among autonomous components in large, open systems. This work focuses on the exploitation and evaluation of the OpenKnowledge framework to support different information-gathering patterns in emergency contexts. The OpenKnowledge (OK) system has been adopted to model and simulate possible emergency plans. The Lightweight Coordination Calculus (LCC) is used to specify interaction models, which are published, discovered and executed by the OK distributed infrastructure in order to simulate peer interactions. A simulation environment fully integrated with the OK system has been developed to: (1) evaluate whether such an infrastructure is able to support different models of information-sharing, e.g., centralized and decentralized patterns of interaction; (2) investigate under which conditions the OK paradigm, exploited in its decentralized nature, can improve the performance of more conventional centralized approaches. Preliminary results show that the OK system can support the two afore-mentioned patterns and that, under ideal assumptions, their performance is comparable.
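The two information-gathering patterns compared in this abstract can be sketched as a toy model. This is an assumption-laden illustration, not the OpenKnowledge implementation: the function names, the report data, and the message-counting scheme are all hypothetical, and real LCC interaction models are far richer than this.

```python
def centralized(reports):
    """Every field peer sends its report directly to the control room:
    one message per peer."""
    control_room = list(reports.values())
    messages = len(reports)
    return control_room, messages

def decentralized(reports, chain):
    """Reports are relayed peer-to-peer along a chain, each peer adding
    its own observation; the final aggregate reaches the control room.
    One message per hop."""
    carried, messages = [], 0
    for peer in chain:
        carried.append(reports[peer])
        messages += 1
    return carried, messages

# Hypothetical field reports from three peers at a disaster scene.
reports = {"p1": "fire at sector A", "p2": "road blocked", "p3": "all clear"}
cen, cen_msgs = centralized(reports)
dec, dec_msgs = decentralized(reports, ["p1", "p2", "p3"])
```

Under these ideal assumptions (no failures, uniform message cost) the two patterns deliver the same information at the same cost, mirroring the comparable performance the abstract reports; the patterns diverge once link failures or congestion at the control room are modelled.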

    Network route optimization: an application to the mountain trail system of Trentino


    OpenKnowledge for peer-to-peer experimentation in protein identification by MS/MS

    Background: Traditional scientific workflow platforms usually run individual experiments with little evaluation and analysis of performance, as required by automated experimentation in which scientists are allowed to access numerous applicable workflows rather than being committed to a single one. Experimental protocols and data in a peer-to-peer environment could potentially be shared freely, without any single point of authority dictating how experiments should be run. In such an environment it is necessary to have mechanisms by which each individual scientist (peer) can assess, locally, how he or she wants to be involved with others in experiments. This study aims to implement and demonstrate simple peer ranking under the OpenKnowledge peer-to-peer infrastructure in both simulated and real-world bioinformatics experiments involving multi-agent interactions. Methods: A simulated experiment environment with a peer ranking capability was specified in the Lightweight Coordination Calculus (LCC) and automatically executed under the OpenKnowledge infrastructure. Peers such as MS/MS protein identification services (including web-enabled and independent programs) were made accessible as OpenKnowledge Components (OKCs) for automated execution as peers in the experiments. The performance of the peers in these automated experiments was monitored and evaluated by simple peer ranking algorithms. Results: Peer ranking experiments with simulated peers exhibited characteristic behaviours, e.g., power-law effects
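A "simple peer ranking algorithm" of the kind the abstract mentions can be sketched as ranking service peers by their empirical success rate over past interactions. This is a hypothetical illustration, not the study's algorithm: the peer names below echo common protein-identification tools but the outcome data is invented, and ties are broken by interaction volume purely as a design choice.

```python
def rank_peers(results):
    """Rank peers by empirical success rate across past interactions,
    breaking ties in favour of peers with more recorded interactions.

    results: dict mapping peer name -> list of booleans
             (True = successful interaction)."""
    scores = {
        peer: (sum(outcomes) / len(outcomes), len(outcomes))
        for peer, outcomes in results.items()
        if outcomes  # skip peers with no history
    }
    return sorted(scores, key=lambda p: scores[p], reverse=True)

# Invented interaction histories for three hypothetical identification peers.
results = {
    "mascot":  [True, True, False, True],
    "omssa":   [True, False, False],
    "xtandem": [True, True, True],
}
ranking = rank_peers(results)  # best peer first
```

Each peer can run such a ranking locally over its own interaction history, which matches the abstract's point that assessment happens without a central authority.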

    Implementation of a distributed guideline-based decision support model within a patient-guidance framework

    We report on a new projection engine developed to implement a distributed guideline-based decision support system (DSS) within the European project MobiGuide. In this model, small portions of the guideline knowledge are projected, i.e. 'downloaded', from a central DSS server to a local DSS on the patient's mobile device, which then applies that knowledge using the device's local resources. Furthermore, the projection engine generates guideline projections adapted to the patient's previously defined preferences and, implicitly, to the patient's current context, which is embodied in the projected knowledge. We evaluated this distributed guideline application model on two complex guidelines: one for Gestational Diabetes Mellitus and one for Atrial Fibrillation. We found that the initial specification of what we refer to as the customized guideline should be in the terms of the distributed DSS, i.e., include two levels: one for the central DSS and one for the local DSS. In addition, we found significant differences between the customized, distributed versions of the two guidelines, indicating further research directions and possibly additional ways to analyze and characterize guidelines.
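The projection idea described here can be illustrated with a minimal sketch. Everything below is hypothetical: the data model (steps tagged with an execution tier and applicable contexts), the step names, and the preference format are invented for illustration and are not MobiGuide's knowledge representation.

```python
# Hypothetical two-level guideline: each step is tagged with the tier
# ("central" or "local") that should execute it and the patient contexts
# in which it applies.
GUIDELINE = [
    {"step": "measure_glucose", "tier": "local",   "contexts": {"routine"}},
    {"step": "adjust_insulin",  "tier": "central", "contexts": {"routine"}},
    {"step": "send_reminder",   "tier": "local",   "contexts": {"routine", "travel"}},
]

def project(guideline, tier, context, preferences):
    """Select the portion of the guideline to 'download' to one tier:
    only steps for that tier, active in the current context, and not
    suppressed by the patient's stated preferences."""
    suppressed = preferences.get("suppress", set())
    return [
        s["step"] for s in guideline
        if s["tier"] == tier
        and context in s["contexts"]
        and s["step"] not in suppressed
    ]

# Projection sent to the patient's mobile device while travelling.
local_plan = project(GUIDELINE, "local", "travel", {"suppress": set()})
```

The same `project` call with `tier="central"` would yield the server-side portion, reflecting the two-level customized-guideline specification the abstract argues for.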

    Models of Interaction as a Grounding for Peer to Peer Knowledge Sharing

    Most current attempts to achieve reliable knowledge sharing on a large scale have relied on pre-engineering of content and supply services. This, like traditional knowledge engineering, does not by itself scale to large, open, peer to peer systems, because the cost of being precise about the absolute semantics of services and their knowledge rises rapidly as more services participate. We describe how to break out of this deadlock by focusing on semantics related to interaction, using this to avoid dependency on a priori semantic agreement and instead making semantic commitments incrementally at run time. Our method is based on interaction models that are mobile in the sense that they may be transferred to other components, this being a mechanism for service composition and for coalition formation. By shifting the emphasis to interaction (the details of which may be hidden from users) we can obtain knowledge sharing of sufficient quality for sustainable communities of practice, without the barrier of complex meta-data provision prior to community formation.